
    Hybrid machine learning architecture for automated detection and grading of retinal images for diabetic retinopathy

    Purpose: Diabetic retinopathy (DR) is the leading cause of blindness, affecting over 93 million people. An automated clinical retinal screening process would be highly beneficial and provide a valuable second opinion for doctors worldwide. A computer-aided system to detect and grade retinal images would enhance the workflow of endocrinologists. Approach: For this research, we make use of a publicly available dataset comprising 3662 images. We present a hybrid machine learning architecture to detect and grade the level of DR severity. We also present and compare simple transfer-learning-based approaches using established networks such as AlexNet, VGG16, ResNet, Inception-v3, NASNet, DenseNet, and GoogLeNet for DR detection. For the grading stage (mild, moderate, proliferative, or severe), we present an approach that combines various convolutional neural networks with principal component analysis for dimensionality reduction and a support vector machine classifier. We study the performance of these networks under different preprocessing conditions. Results: We compare these results with various existing state-of-the-art approaches, including single-stage architectures. We demonstrate that this architecture is more robust to limited training data and class imbalance. We achieve an accuracy of 98.4% for DR detection and an accuracy of 96.3% for distinguishing the severity of DR, thereby setting a benchmark for future research efforts using a limited set of training images. Conclusions: Results obtained using the proposed approach serve as a benchmark for future research efforts. We demonstrate as a proof of concept that an automated detection and grading system can be developed with a limited set of images and labels. Such an independent detection and grading architecture could be deployed, as needed, in areas with a scarcity of trained clinicians.
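    A minimal sketch of the grading stage described above: CNN features feeding PCA and an SVM. The ResNet-50 backbone, 128-component PCA, and RBF-kernel SVM are illustrative assumptions rather than the paper's exact configuration, and extract_features, train_paths, and the label arrays are hypothetical placeholders.

```python
# Sketch: CNN features -> PCA -> SVM for DR severity grading.
# Backbone, PCA size, and SVM settings are illustrative assumptions.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pretrained backbone with the classification head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()  # emit 2048-D pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    # Hypothetical helper: one 2048-D feature vector per fundus image.
    feats = []
    for p in image_paths:
        x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)

# PCA dimensionality (128) is an illustrative choice, not the paper's.
grader = make_pipeline(StandardScaler(), PCA(n_components=128),
                       SVC(kernel="rbf", C=1.0))
# grader.fit(extract_features(train_paths), y_train)   # severity labels
# preds = grader.predict(extract_features(test_paths))
```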

    A Computationally Efficient U-Net Architecture for Lung Segmentation in Chest Radiographs

    Lung segmentation plays a crucial role in computer-aided diagnosis using chest radiographs (CRs). We implement a U-Net architecture for lung segmentation in CRs across multiple publicly available datasets. We utilize a private dataset with 160 CRs provided by the Riverain Medical Group for training purposes. A publicly available dataset provided by the Japanese Society of Radiological Technology (JRST) is used for testing. Active shape model-based results serve as the ground truth for both of these datasets. In addition, we study the performance of our algorithm on the publicly available Shenzhen dataset, which contains 566 CRs with manually segmented lungs (ground truth). Our overall performance in terms of pixel-based classification is about 98.3% for a set of 100 CRs in the Shenzhen dataset and 95.6% for 140 CRs in the JRST dataset. We also achieve an intersection-over-union value of 0.95 at a computation time of 8 seconds for the entire suite of Shenzhen testing cases.
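    A compact U-Net sketch for binary lung masks, together with the intersection-over-union metric reported above. Depth, channel widths, and input size are assumed for illustration; the paper's exact, computationally efficient network may differ.

```python
# Sketch: a small U-Net for binary lung segmentation in chest radiographs.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(1, 32), conv_block(32, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), conv_block(128, 64)
        self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel lung logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

def iou(pred_mask, true_mask):
    """Intersection over union for binary masks (the 0.95 metric above)."""
    inter = (pred_mask & true_mask).sum()
    union = (pred_mask | true_mask).sum()
    return inter / union

# net = SmallUNet()
# pred = torch.sigmoid(net(torch.randn(1, 1, 256, 256))) > 0.5
```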

    Analysis of Various Classification Techniques for Computer Aided Detection System of Pulmonary Nodules in CT

    Lung cancer is the leading cause of cancer death in the United States. It usually exhibits its presence through the formation of pulmonary nodules: round or oval-shaped growths in the lung. Radiologists use computed tomography (CT) scans to detect such nodules. Computer-aided detection (CAD) of such nodules would provide a second opinion to radiologists and would be of valuable help in lung cancer screening. In this research, we study various feature selection methods for the CAD system framework proposed in FlyerScan. The algorithmic steps of FlyerScan are (i) local contrast enhancement, (ii) automated anatomical segmentation, (iii) detection of potential nodule candidates, (iv) feature computation and selection, and (v) candidate classification. In this paper, we study the performance of FlyerScan with various classification methods, such as the linear, quadratic, and Fisher linear discriminant classifiers. The algorithm is implemented using the publicly available Lung Image Database Consortium – Image Database Resource Initiative (LIDC-IDRI) dataset. We handpicked 107 cases from LIDC-IDRI for this paper, and the performance of the CAD system is also studied on 5 example cases from the Automatic Nodule Detection 2009 (ANODE09) database. This research will aid in improving the nodule detection rate in CT scans, thereby enhancing a patient's chance of survival.
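    A hedged sketch of comparing the discriminant classifiers named above on candidate features, here using scikit-learn on synthetic placeholder data; the real features would come from FlyerScan's candidate-detection steps. Note that scikit-learn's LinearDiscriminantAnalysis corresponds to the Fisher-style linear discriminant.

```python
# Sketch: comparing candidate classifiers for nodule candidates.
# The feature matrix here is synthetic; real features come from the CAD pipeline.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

# X: (n_candidates, n_features) candidate features; y: 1 = nodule, 0 = false positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

classifiers = {
    "linear discriminant": LinearDiscriminantAnalysis(),
    "quadratic discriminant": QuadraticDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```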

    Multiframe Adaptive Wiener Filter Super-Resolution with JPEG2000-Compressed Images

    Historically, Joint Photographic Experts Group 2000 (JPEG2000) image compression and multiframe super-resolution (SR) image processing techniques have evolved separately. In this paper, we propose and compare novel processing architectures for applying multiframe SR with JPEG2000 compression. We propose a modified adaptive Wiener filter (AWF) SR method and study its performance as JPEG2000 is incorporated in different ways. In particular, we perform compression prior to SR and compare this to compression after SR. We also compare independent-frame compression and difference-frame compression approaches. We find that some of the SR artifacts that result from compression can be reduced by decreasing the assumed global signal-to-noise ratio (SNR) for the AWF SR method. We also propose a novel spatially adaptive SNR estimate for the AWF, designed to compensate for the spatially varying compression artifacts in the input frames. The experimental results include the use of simulated imagery for quantitative analysis. We also include real-video results for subjective analysis.
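    To illustrate why lowering the assumed SNR tames compression artifacts, here is a single-frame frequency-domain Wiener restoration with the assumed SNR as a tuning knob. The paper's multiframe AWF additionally fuses several registered frames, so this is only a simplified analogue; the Gaussian PSF and the SNR value are assumptions.

```python
# Sketch: single-frame Wiener restoration with a tunable assumed SNR.
# Lower assumed_snr -> heavier regularization -> compression artifacts are
# smoothed rather than amplified, at the cost of some sharpness.
import numpy as np

def wiener_restore(image, psf, assumed_snr):
    """Frequency-domain Wiener filter with a global assumed SNR."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    nsr = 1.0 / assumed_snr  # noise-to-signal power ratio
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(image)))

def gaussian_psf(shape, sigma):
    """Centered Gaussian blur PSF (illustrative degradation model)."""
    yy, xx = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    psf = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

# restored = wiener_restore(compressed_frame,
#                           gaussian_psf(compressed_frame.shape, 1.5),
#                           assumed_snr=50)
```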

    Performance Analysis of Feature Selection Techniques for Support Vector Machine and its Application for Lung Nodule Detection

    Lung cancer typically exhibits its presence through the formation of pulmonary nodules. Computer-aided detection (CAD) of such nodules in CT scans would be of valuable help in lung cancer screening. A typical CAD system comprises a candidate detector and a feature-based classifier. In this research, we study and explore the performance of a support vector machine (SVM) based on a large set of features. We study the performance of the SVM as a function of the number of features. Our results indicate that the SVM is more robust, computationally faster with a large set of features, and less prone to overtraining than traditional classifiers. In addition, we present a computationally efficient approach for selecting features for the SVM. Results are presented for the publicly available Lung Nodule Analysis 2016 dataset. Our results, based on 10-fold cross-validation, indicate that the SVM-based classification method outperforms the Fisher linear discriminant classifier by 14.8%.
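    A sketch of the performance-versus-feature-count experiment, using a simple filter ranking (ANOVA F-score) and an RBF SVM in scikit-learn on synthetic data. The paper's actual feature set and selection method are not reproduced here; only the experimental shape is.

```python
# Sketch: SVM cross-validated performance as a function of feature count.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=60, n_informative=15,
                           random_state=0)
ranking = np.argsort(f_classif(X, y)[0])[::-1]  # rank features by F-score

for k in (5, 15, 30, 60):
    cols = ranking[:k]  # keep the k top-ranked features
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    auc = cross_val_score(svm, X[:, cols], y, cv=10, scoring="roc_auc").mean()
    print(f"top {k:2d} features: AUC = {auc:.3f}")
```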

    Performance analysis of a computer-aided detection system for lung nodules in CT at different slice thicknesses

    We study the performance of a computer-aided detection (CAD) system for lung nodules in computed tomography (CT) as a function of slice thickness. In addition, we propose and compare three different training methodologies for utilizing nonhomogeneous-thickness training data (i.e., data composed of cases with different slice thicknesses). These methods are (1) aggregate training using the entire suite of data at their native thickness, (2) homogeneous subset training that uses only the subset of training data that matches each testing case, and (3) resampling all training and testing cases to a common thickness. We believe this study has important implications for how CT is acquired, processed, and stored. We make use of 192 CT cases acquired at a thickness of 1.25 mm and 283 cases at 2.5 mm. These data are from the publicly available Lung Nodule Analysis 2016 dataset. In our study, CAD performance at 2.5 mm is comparable with that at 1.25 mm and is much better than at higher thicknesses. Also, resampling all training and testing cases to 2.5 mm provides the best performance among the three training methods compared, in terms of accuracy, memory consumption, and computational time.
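    Training method (3) amounts to resampling every volume to a common slice thickness before training and testing. A minimal sketch with scipy follows; linear interpolation along the slice axis and the (z, y, x) axis convention are assumed choices.

```python
# Sketch: resample a CT volume's slice spacing to a common thickness.
import numpy as np
from scipy.ndimage import zoom

def resample_slices(volume, native_thickness_mm, target_thickness_mm):
    """Resample along the slice (z) axis; volume is (z, y, x)."""
    factor = native_thickness_mm / target_thickness_mm
    return zoom(volume, zoom=(factor, 1.0, 1.0), order=1)  # linear interpolation

# e.g., down-sample a 1.25 mm volume to 2.5 mm before training/testing:
# vol_25 = resample_slices(vol_125, native_thickness_mm=1.25,
#                          target_thickness_mm=2.5)
```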

    Two-stage deep learning architecture for pneumonia detection and its diagnosis in chest radiographs

    Approximately two million pediatric deaths occur every year due to pneumonia. Detection and diagnosis of pneumonia play an important role in reducing these deaths. Chest radiography is one of the most commonly used modalities for detecting pneumonia. In this paper, we propose a novel two-stage deep learning architecture to detect pneumonia and classify its type in chest radiographs. This architecture contains one network to classify images as either normal or pneumonic, and another deep learning network to classify the type as either bacterial or viral. We study and compare the performance of various stage-one networks, such as AlexNet, ResNet, VGG16, and Inception-v3, for the detection of pneumonia. For these networks, we employ transfer learning to exploit the wealth of information available from prior training. For the second stage, we find that transfer learning with these same networks tends to overfit the data. For this reason, we propose a simpler CNN architecture for the classification of pneumonic chest radiographs and show that it overcomes the overfitting problem. We further enhance the performance of our system in a novel way by incorporating lung segmentation using a U-Net architecture. We make use of a publicly available dataset comprising 5856 images (1583 normal, 4273 pneumonic). Among the pneumonia patients, 2780 are identified as bacterial cases and the rest belong to the viral category. We test our proposed algorithms on a set of 624 images and achieve an area under the receiver operating characteristic curve of 0.996 for pneumonia detection. We also achieve an accuracy of 97.8% for the classification of pneumonic chest radiographs, thereby setting a new benchmark for both detection and diagnosis. We believe the proposed two-stage classification of chest radiographs for pneumonia detection and diagnosis would enhance the workflow of radiologists.
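    A sketch of the two-stage idea: transfer learning for normal-versus-pneumonic, followed by a small from-scratch CNN for bacterial-versus-viral. ResNet-18 as the stage-one backbone and the stage-two layer sizes are illustrative assumptions, not the paper's exact networks.

```python
# Sketch: two-stage pneumonia detection and diagnosis.
import torch
import torch.nn as nn
import torchvision.models as models

# Stage 1: transfer learning -- swap the final layer, then fine-tune on CRs.
stage1 = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
stage1.fc = nn.Linear(stage1.fc.in_features, 2)  # normal vs. pneumonic

# Stage 2: a deliberately small CNN, mirroring the paper's remedy for the
# overfitting seen when large pretrained networks were fine-tuned on the
# bacterial/viral split. Layer sizes are illustrative.
stage2 = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 2),              # bacterial vs. viral
)

@torch.no_grad()
def classify(x_rgb, x_gray):
    """Run stage 2 only when stage 1 flags pneumonia (single-image batches)."""
    if stage1(x_rgb).argmax(dim=1).item() == 1:  # pneumonic
        return "bacterial" if stage2(x_gray).argmax(dim=1).item() == 0 else "viral"
    return "normal"
```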

    Multiframe super resolution with JPEG2000 compressed images

    This research is primarily focused on combining image processing techniques (super-resolution) with image compression. Historically, Joint Photographic Experts Group 2000 (JPEG2000) image compression and multiframe super-resolution (SR) image processing techniques have evolved separately. In this research, we propose and compare novel processing architectures for applying multiframe SR with JPEG2000 compression. We focus on the adaptive Wiener filter (AWF) method of SR and study the SR performance as JPEG2000 is incorporated in different ways. In particular, we perform compression prior to SR and compare this to compression after SR. We also compare independent-frame compression with difference-frame compression. We find that the effects of compression can be reduced by increasing the signal-to-noise ratio in the correlation model for the AWF SR method, providing a novel approach to treating the compression artifacts. The experimental results include the use of simulated imagery for quantitative analysis. We also include real-video results for subjective analysis. A chirp pattern is included in the scene to aid in the subjective analysis with regard to resolution and aliasing.
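    A chirp target is useful here because its spatial frequency grows continuously with radius, so aliasing shows up as visible ring artifacts after down-sampling. The sketch below generates a radial chirp with an assumed frequency sweep; the actual test pattern used in the experiments may differ.

```python
# Sketch: radial chirp test pattern for resolution/aliasing analysis.
import numpy as np

def radial_chirp(size=512, max_freq=0.5):
    """Sinusoid whose spatial frequency ramps linearly with radius.

    Instantaneous frequency reaches max_freq (cycles/pixel) at the image edge.
    """
    yy, xx = np.indices((size, size)) - size / 2
    r = np.hypot(yy, xx)
    phase = np.pi * max_freq * r ** 2 / (size / 2)
    return 0.5 + 0.5 * np.cos(phase)

chirp = radial_chirp()
# Down-sample without low-pass filtering to make aliasing visible, then feed
# the frames through the SR + JPEG2000 pipelines for visual comparison.
aliased = chirp[::4, ::4]
```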

    New classifier architecture and training methodologies for lung nodule detection in chest radiographs and computed tomography

    Early detection of pulmonary nodules plays a significant role in the diagnosis of lung cancer. Radiologists use computed tomography (CT) and chest radiographs (CRs) to detect such nodules. In this research, we propose various pattern recognition algorithms to enhance the classification performance of the computer-aided detection (CAD) system for lung nodule detection in both modalities. We propose a novel optimized method of feature selection for clustering that aids the performance of the classifier. We make use of an independent CR database for training purposes. Testing is implemented on a publicly available database created by the Standard Digital Image Database Project Team of the Scientific Committee of the Japanese Society of Radiological Technology (JRST). The JRST database comprises 154 CRs, each containing one radiologist-confirmed nodule. We make use of 107 CT scans from the publicly available dataset created by the Lung Image Database Consortium (LIDC) for this study. We compare the performance of the cluster-classifier architecture to a single aggregate classifier architecture; a sketch of this comparison follows the abstract. Overall, at an average of 3 false positives per case, we show a classifier performance boost of 7.7% for CRs and 5.0% for CT scans compared to the single aggregate classifier architecture. Furthermore, we study the performance of a CAD system in CT scans as a function of slice thickness. We believe this study has implications for how CT is acquired, processed, and stored. We make use of CT cases acquired at a thickness of 1.25 mm from the publicly available Lung Nodule Analysis 2016 (LUNA16) dataset for this research. We study the CAD performance at the native thickness of 1.25 mm and at various down-sampled stages. Our study indicates that CAD performance at 2.5 mm is comparable to that at 1.25 mm and is much better than at higher thicknesses. In addition, we propose and compare three different training methodologies for utilizing nonhomogeneous-thickness training data (i.e., data composed of cases with different slice thicknesses). We utilize cases acquired at 1.25 mm and 2.5 mm from the LUNA16 dataset for this study. These methods are: (1) aggregate training using the entire suite of data at their native thickness; (2) homogeneous subset training that uses the subset of training data that matches each testing case; and (3) resampling all training and testing cases to a common thickness. Our experimental results indicate that resampling all training and testing cases to 2.5 mm provides the best performance among the three training methods compared. Furthermore, the resampled 2.5 mm data require less memory and process faster than the 1.25 mm data.
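    A sketch of the cluster-classifier idea compared against a single aggregate classifier: candidates are grouped by k-means in feature space, and one classifier is trained per cluster. The cluster count, the k-means grouping, and the per-cluster linear discriminant are illustrative assumptions (and each cluster is assumed to contain both classes).

```python
# Sketch: cluster-then-classify versus a single aggregate classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class ClusterClassifier:
    """Group candidates with k-means, then train one classifier per cluster."""

    def __init__(self, n_clusters=3):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.clfs = {}

    def fit(self, X, y):
        ids = self.km.fit_predict(X)
        for c in np.unique(ids):
            # Assumes every cluster contains both classes.
            self.clfs[c] = LinearDiscriminantAnalysis().fit(X[ids == c], y[ids == c])
        return self

    def predict(self, X):
        ids = self.km.predict(X)  # route each sample to its cluster's classifier
        return np.array([self.clfs[c].predict(x[None])[0]
                         for c, x in zip(ids, X)])

# model = ClusterClassifier().fit(X_train, y_train)
# preds = model.predict(X_test)  # compare against one aggregate classifier
```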
